Scalable Shared Memory Parallel Programming: Will One Size Fit All?
Author
Abstract
In recent years, there has been much emphasis on improving the productivity of high-end parallel programmers. Efforts to design very large-scale platforms have focused on global address space machines that are capable of concurrently executing many thousands of threads. As a result, new higher level shared memory programming models have been proposed that are intended to reduce the programming effort and directly exploit the capabilities of such systems.
Similar Resources
Simple, Fast and Scalable Parallel Algorithms for Shared Memory (Thesis Proposal)
To ease the transition into the multicore/manycore era, shared-memory programming must be made more natural and accessible to the community. Furthermore, shared-memory algorithms need to be fast and scalable in order to quickly process large data. In this proposed thesis we will study techniques for simplifying parallel programming and allowing users to easily write efficient and scalable algor...
Parallel Processing Using the Silicon Graphics / Cray Origin 2000
The Origin 2000 is a high performance computing platform produced jointly by Silicon Graphics / Cray. This scalable shared-memory multiprocessor (SSMP) may be configured with up to 128 processors in a single system image. The Origin is a scalable, cache-coherent, non-uniform memory access (CC-NUMA), distributed shared memory (DSM) architecture based on a hypercube interconnection topology. Effective...
Atomic Section: Concept and Implementation
A key source of complexity in parallel programming arises from fine-grained synchronization. In the shared memory programming language OpenMP, mutual exclusion for shared data access is achieved by critical sections or locks. The critical section degrades performance by serializing all critical section instances through a global lock, and impedes scalable parallelism by the underly...
SCI-VM: A Flexible Base for Transparent Shared Memory Programming Models on Clusters of PCs
Clusters of PCs are traditionally programmed using the message passing paradigm as this is directly supported by their loosely coupled architecture. Shared memory programming is mostly neglected although it is commonly seen as the easier and more intuitive way of parallel programming. Based on the user-level remote memory capabilities of the Scalable Coherent Interface, this paper presents the ...
Achieving Scalability in Parallel Tabled Logic Programs
Tabling, or memoing, is a technique in which one stores intermediate answers to a problem so that they can be reused in later calls. Tabling is of interest to logic programming because it addresses some of the most significant weaknesses of Prolog: notably, it can guarantee termination for programs with the bounded term-size property. Tabled programs exhibit a more complex execution mechanism than trad...